
Virtual Adversarial Training: a Regularization Method for Supervised and Semi-supervised Learning



Abstract

We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the output distribution. Virtual adversarial loss is defined as the robustness of the model's posterior distribution against local perturbation around each input data point. Our method is similar to adversarial training, but differs from adversarial training in that it determines the adversarial direction based only on the output distribution and that it is applicable to a semi-supervised setting. Because the directions in which we smooth the model are virtually adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward and back propagations. In our experiments, we applied VAT to supervised and semi-supervised learning on multiple benchmark datasets. With additional improvement based on the entropy minimization principle, our VAT achieves the state-of-the-art performance on SVHN and CIFAR-10 for semi-supervised learning tasks.
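To make the smoothness term in the abstract concrete, the following is a minimal illustrative sketch of a virtual adversarial loss in PyTorch. It is not the authors' released implementation: the function name vat_loss, the helper _l2_normalize, and the hyperparameters xi (finite-difference scale), eps (perturbation radius), and n_power (power-iteration steps) are names assumed for this sketch, and the model is assumed to map inputs to class logits.

    import torch
    import torch.nn.functional as F

    def _l2_normalize(d):
        # Normalize each example's perturbation to unit L2 norm.
        norm = d.reshape(d.size(0), -1).norm(dim=1)
        return d / (norm.reshape(-1, *([1] * (d.dim() - 1))) + 1e-8)

    def vat_loss(model, x, xi=1e-6, eps=8.0, n_power=1):
        # Virtual adversarial loss: KL(p(y|x) || p(y|x + r_adv)), where r_adv is
        # found from the output distribution alone (no labels required).
        with torch.no_grad():
            p = F.softmax(model(x), dim=1)  # fixed target distribution

        # Start from a random direction and refine it by power iteration.
        d = _l2_normalize(torch.randn_like(x))
        for _ in range(n_power):
            d.requires_grad_(True)
            log_p_hat = F.log_softmax(model(x + xi * d), dim=1)
            dist = F.kl_div(log_p_hat, p, reduction="batchmean")
            grad = torch.autograd.grad(dist, d)[0]
            d = _l2_normalize(grad.detach())

        # Virtual adversarial perturbation and the resulting smoothness term.
        r_adv = eps * d
        log_p_hat = F.log_softmax(model(x + r_adv), dim=1)
        return F.kl_div(log_p_hat, p, reduction="batchmean")

With n_power=1, this sketch matches the cost claim in the abstract: one gradient-free forward pass for the target distribution, one forward/backward pair for the power iteration, and the final KL term is differentiated together with the ordinary training objective. Because no labels are used, the same term can be added for unlabeled examples in the semi-supervised setting.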
